Understanding and Meaning

Expanding on my previous post a bit more to unpack what I think Understanding and Meaning have to do with our conversations around AI.

So my last post got "Downes'd", which is nice, but at the same time, Stephen's commentary can be challenging. In his brief response, he laid out his issue:

OK, sounds great, until you push a bit on what counts exactly as 'understanding' and 'meaning'. What is 'understanding'? Knowledge of causal principles? Not robust enough. General laws and principles? Too inflexible. A model or world view? Sure, now we're getting closer. But that's what AIs do! Kids learning 'why' - what sort of answer do you give them? Cause, principles, theory. Right? So what is there to 'understanding' that AIs don't do. The same sort of questions arise around 'meaning'. Do we mean 'intentionality'? 'Intensionality?' "emotions or tone" Aw, we already know AI can respond to these. It's too easy to simply say "understanding & meaning."

So, let me expand a little, as I think it's useful to delve a bit deeper.

Stephen provides a couple of options for framing understanding, but I'd like to keep it simple. Understanding is not just the knowledge of a thing; you must also grasp its cause or explanation. It is a deeper sense of what the thing is, but also how it has come to be. It isn't just knowing the definition of the words but the context in which you would use them.

Let's apply that definition. The thing, in the example of ChatGPT, is text. Yes, it knows that object quite well and intimately. But it doesn't grasp cause or explanation. As I said:

ChatGPT didn't learn the topic, it did no research, it did no validation, and it contributed no novel thoughts, ideas or concepts.

Yes, it can demonstrate knowledge of a specific domain - a model of that knowledge, yes. However, a model is not the same as understanding. The 'worldview' this AI has isn't worldly, connected or contextual - it sits within a specific domain. There is no connection to other models, concepts or bodies of knowledge, or to the world it inhabits. Those are the elements that generate understanding. These particular models are purely mathematical - they are calculations of the most probable answer. This is what Large Language Models do - calculate. They don't do anything else to achieve their end results, which can be impressive and can look as though intelligence created them - but that's not how they were achieved. The LLM never understood your request; it performed a series of complex pattern-matching operations and weighted the results towards whatever calculated as the most probable answer.
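To make that point concrete, here's a toy Python sketch of the kind of operation described above: score every candidate next word, turn the scores into probabilities, and pick the most likely one. Everything here is invented for illustration - the vocabulary, the scoring function, the prompt - and a real LLM does this with billions of learned weights rather than a crude heuristic. But the shape of the operation is the same: selection by calculated probability, not comprehension.

```python
import math

# Toy illustration only: the vocabulary and scoring are invented.
# A real LLM computes its scores with billions of learned weights,
# but the operation is still: score, normalise, pick.

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(context, vocab, score_fn):
    """Pick the single most probable next token given the context."""
    scores = [score_fn(context, token) for token in vocab]
    probs = softmax(scores)
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best], probs[best]

# Stand-in scoring function: crude first-letter co-occurrence with the prompt.
def toy_score(context, token):
    return sum(1.0 for word in context.split() if word[0] == token[0])

vocab = ["understanding", "meaning", "cat", "calculation"]
token, prob = next_token("models make meaning?", vocab, toy_score)
print(token, round(prob, 2))  # the "answer" is just the highest-scoring pattern
```

Nowhere in that calculation is there anything resembling grasping a cause or an explanation - just arithmetic over patterns.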

Understanding, on the other hand, based on the definition above, would involve triangulating that answer across a broader range of knowledge, experience, connections and contexts. It would also include coming back without an answer, or with additional questions and clarifications, or not providing an answer at all if the context or the request was wrong or inappropriate.

AI, as it stands, has none of those features, so it cannot understand.


When it comes to meaning, Stephen offers a couple more suggestions:

Do we mean 'intentionality'? 'Intensionality?' "emotions or tone"[?]

For me, meaning has more to do with seeking an answer, asking why, and seeking out the cause, effect and reason for things occurring. In that sense, it is about intentionality and less about the others. Seeking out and finding meaning is beyond an algorithm's capability because an algorithm is not sentient, alive or intelligent. Stephen's next line illustrates that perfectly:

Aw, we already know AI can respond to these

Respond - exactly. It doesn't engage in making meaning; it can't, because it's just calculations, and even those have to be kicked off by an executable command. There are also oodles of examples that show how badly generative tools can respond. They don't understand the meaning of math, humour, sarcasm, fingers, nuance or language as a vessel for emotion. But we can and do.

There isn't a single step in the algorithm that involves making meaning from what it outputs. It is pattern matching. It's good at that, yes, but that is not making meaning. Readability, legibility, spelling and grammar - they are patterns. Meaning is not. AI is just code - anything beyond that is us creating (or buying into) the illusion. These are the parlour tricks I've mentioned: it looks intelligent, and if we speak of it as intelligence, we lend credence to it being thought of as intelligence. But it isn't. It's just code.
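Continuing the toy sketch from earlier, and with the same caveats (every name and value below is invented for illustration, not any real system's code), the whole generation process is just that selection step run in a loop - and the loop only runs when something external starts it. Notice what is absent: no step interprets, questions or cares about what has been produced.

```python
# Toy autoregressive loop, invented purely for illustration.
# Note what is missing: no step interprets, verifies, or "means" anything.

def pick_next(context: str) -> str:
    """Stand-in for the probability calculation sketched earlier."""
    vocab = ["patterns", "all", "the", "way", "down", "."]
    # Pretend scoring: cycle through the vocabulary by context length.
    return vocab[len(context.split()) % len(vocab)]

def generate(prompt: str, steps: int) -> str:
    """Append the highest-scoring token, one step at a time."""
    text = prompt
    for _ in range(steps):          # a fixed loop, not curiosity
        text += " " + pick_next(text)
    return text

# Nothing happens until a human (or another program) kicks it off:
print(generate("What does this text mean?", 6))
```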

Now look, I'll be honest - I'm getting outside my comfort zone here. I don't have the background to jump into a philosophical debate about the different interpretations of meaning and understanding; they seem like contested concepts across multiple disciplines. What I contend is that, regardless of that nuance, what generative AI does is not an example of intelligence - that none of the generative tools out there today represent anything intelligent. They don't function like that.

There is a huge amount of confusion about what "AI" does, how it does it, and what it seems to do. There is a mislabelling (an intentional one) that rebrands machine learning and other big data concepts as "intelligence". By labelling this technology intelligence, we perform the gestalt and apply the concepts that sit alongside it. We attribute traits and behaviours that don't actually exist, and in doing so we become part of the hallucination of AI.


Doug Belshaw linked out to a great post by Jennifer Moore, who makes this argument much better than I do. She goes into the operations of AI and how an LLM actually works. I'd already picked this up from my own research and experience, but I don't think I could have explained it as simply as she does.